Comment on "Recurrent neural networks: A constructive algorithm, and its properties"
Authors
Abstract
In their paper [1], Tsoi and Tan present what they call a "canonical form", which they claim to be identical to the one proposed by Nerrand et al. [2]. They also claim that the algorithm they present can be applied to any recurrent neural network. In the present comment, we disprove both claims.

Back in 1993, Nerrand et al. [2] proposed a general approach to the training of recurrent networks, either adaptively (on-line) or non-adaptively (off-line). One of the main points of that paper was the introduction of the minimal state-space form, or canonical form, defined in relations (4) and (4a) of their paper as:

z(n+1) = φ[z(n), u(n)]        (state equation)
y(n+1) = ψ[z(n+1), u(n+1)]    (output equation)

where z(n) is a state vector, i.e. a minimal set of variables necessary at time n for computing the future output vector y(n+1), the external input vector u(n+1) (control inputs, measured disturbances, ...) being known. A graphic representation of the canonical form is shown in Figure 1, assuming that the functions φ and ψ are computed by a single neural network.

In Appendix 1 of their paper, Nerrand et al. showed how to compute the order of any recurrent network, and in Appendix 2 they derived the canonical form of various recurrent network architectures proposed by other authors. Since then, researchers of the same group have made use of the canonical form in various circumstances ([3], [4], [5]). They derived a general proof of the existence of the canonical form for a class of nonlinear discrete-time models including neural networks, and they provided a systematic procedure for deriving the canonical form, which was presented and published on various occasions ([6], [7]).
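To make the canonical form concrete, the following short Python/NumPy sketch simulates a model of this type. It is only an illustration under assumed choices: the network sizes, the random weights, the tanh nonlinearity, and the use of two small feedforward approximators for φ and ψ (rather than the single network of Figure 1) are assumptions, not the architecture discussed in [1] or [2].

```python
import numpy as np

# Minimal sketch of the canonical (minimal state-space) form:
#   z(n+1) = phi[z(n), u(n)]        (state equation)
#   y(n+1) = psi[z(n+1), u(n+1)]    (output equation)
# phi and psi are approximated here by small one-hidden-layer maps with
# random weights standing in for a trained network (illustrative only).

rng = np.random.default_rng(0)
state_dim, input_dim, output_dim, hidden_dim = 3, 2, 1, 8

W_phi_h = rng.normal(scale=0.3, size=(hidden_dim, state_dim + input_dim))
W_phi_o = rng.normal(scale=0.3, size=(state_dim, hidden_dim))
W_psi_h = rng.normal(scale=0.3, size=(hidden_dim, state_dim + input_dim))
W_psi_o = rng.normal(scale=0.3, size=(output_dim, hidden_dim))

def phi(z, u):
    """State equation: next state from the current state and current input."""
    return W_phi_o @ np.tanh(W_phi_h @ np.concatenate([z, u]))

def psi(z_next, u_next):
    """Output equation: output from the new state and the known new input."""
    return W_psi_o @ np.tanh(W_psi_h @ np.concatenate([z_next, u_next]))

def simulate(u_sequence, z0):
    """Run the canonical form over a sequence of external inputs u(0..N)."""
    z, outputs = z0, []
    for n in range(len(u_sequence) - 1):
        z = phi(z, u_sequence[n])                   # z(n+1)
        outputs.append(psi(z, u_sequence[n + 1]))   # y(n+1)
    return np.array(outputs)

u_seq = rng.normal(size=(10, input_dim))            # external inputs u(0)..u(9)
y_seq = simulate(u_seq, z0=np.zeros(state_dim))
print(y_seq.shape)                                  # (9, 1): y(1)..y(9)
```

The loop makes explicit that the state z(n+1) is computed from z(n) and u(n), and that the output y(n+1) then uses the new state together with the known external input u(n+1).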
Similar articles
Performance Analysis of a New Neural Network for Routing in Mesh Interconnection Networks
Routing is one of the basic parts of a message passing multiprocessor system. The routing procedure has a great impact on the efficiency of a system. Neural algorithms that are currently in use for computer networks require a large number of neurons. If a specific topology of a multiprocessor network is considered, the number of neurons can be reduced. In this paper a new recurrent neural ne...
Application of artificial neural networks on drought prediction in Yazd (Central Iran)
In recent decades, artificial neural networks (ANNs) have shown great ability in modeling and forecasting non-linear and non-stationary time series, and in most cases, especially in the prediction of such phenomena, they have shown very good performance. This paper presents the application of artificial neural networks to drought prediction at the Yazd meteorological station. In this research, different archite...
Constructing Deterministic Finite-State Automata in Recurrent Neural Networks
Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFA's) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contribute to this insta...
The "echo state" approach to analysing and training recurrent neural networks – with an Erratum note
The report introduces a constructive learning algorithm for recurrent neural networks, which modifies only the weights to output units in order to achieve the learning task. Key words: recurrent neural networks, supervised learning.
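The preceding snippet describes training only the weights to the output units. As a rough illustration of that idea, the sketch below keeps a fixed random recurrent part and fits a linear readout by least squares; the reservoir size, spectral-radius scaling, toy sine task, and least-squares fit are assumptions chosen for illustration, not the procedure of the report itself.

```python
import numpy as np

# Readout-only training for a recurrent network: the recurrent (reservoir)
# weights stay fixed and random; only the output weights are learned.
# All sizes, scalings and the toy task below are illustrative assumptions.

rng = np.random.default_rng(1)
n_res, n_in = 100, 1

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u_seq):
    """Collect reservoir states x(n+1) = tanh(W x(n) + W_in u(n))."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from the current one.
t = np.arange(500)
u_seq = np.sin(0.1 * t).reshape(-1, 1)
targets = np.sin(0.1 * (t + 1))

X = run_reservoir(u_seq)
washout = 50                                       # discard initial transient
W_out, *_ = np.linalg.lstsq(X[washout:], targets[washout:], rcond=None)

pred = X[washout:] @ W_out                         # only W_out was trained
print("training MSE:", np.mean((pred - targets[washout:]) ** 2))
```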
Journal: Neurocomputing
Volume: 15, Issue: -
Pages: -
Publication date: 1997